Article Search
  Fee-based full text   5967 articles
  Free   399 articles
  Free domestically   53 articles
Finance & Banking   562 articles
Industrial Economics   308 articles
Planning & Management   1671 articles
Economics   1250 articles
General   429 articles
Transport Economics   96 articles
Tourism Economics   124 articles
Trade Economics   977 articles
Agricultural Economics   435 articles
Economic Overview   567 articles
  2024: 13 articles
  2023: 150 articles
  2022: 130 articles
  2021: 244 articles
  2020: 344 articles
  2019: 296 articles
  2018: 252 articles
  2017: 294 articles
  2016: 239 articles
  2015: 248 articles
  2014: 419 articles
  2013: 625 articles
  2012: 436 articles
  2011: 431 articles
  2010: 342 articles
  2009: 279 articles
  2008: 308 articles
  2007: 290 articles
  2006: 229 articles
  2005: 183 articles
  2004: 132 articles
  2003: 123 articles
  2002: 88 articles
  2001: 71 articles
  2000: 60 articles
  1999: 32 articles
  1998: 46 articles
  1997: 25 articles
  1996: 24 articles
  1995: 14 articles
  1994: 11 articles
  1993: 7 articles
  1992: 6 articles
  1991: 5 articles
  1990: 4 articles
  1989: 1 article
  1988: 4 articles
  1987: 3 articles
  1986: 4 articles
  1985: 4 articles
  1983: 2 articles
  1980: 1 article
Sort order: 6419 results in total (search time: 140 ms)
51.
Recent contributions to growth theory stress the importance of localized innovation for the performance of more backward countries. Earlier analyses using DEA techniques confirmed this intuition. In this paper, we extend that type of analysis by relaxing the macroeconomic viewpoint adopted until now. New databases on output, labor, and capital input in the agricultural and manufacturing sectors are developed for 40 countries. Using intertemporal DEA, we find that changes in the global production frontier are localized at high levels of capital intensity; this result is stronger in agriculture than in manufacturing. We then decompose labor productivity growth in eight Asian countries over 1975–1992 into the effects of capital intensification, learning, and innovation. The results suggest a particular development path in which increases in capital intensity appear to be a prerequisite for benefiting from international technology spillovers. JEL Classification: O14, O30, O40, O47
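To make the machinery concrete, here is a minimal sketch, with invented data, of the output-oriented constant-returns DEA score on which such frontier and decomposition exercises rest. It is an illustration, not the authors' code; scoring units against a pooled all-period frontier is what makes the analysis intertemporal, and the tripartite decomposition additionally requires scores against each period's own frontier.

```python
# Minimal output-oriented CRS DEA score, solved as a linear program with SciPy.
# Illustrative only: data are invented, not the paper's 40-country panels.
import numpy as np
from scipy.optimize import linprog

def farrell_score(Y, X, i):
    """Output-oriented Farrell score for unit i under constant returns.

    Solves: max theta  s.t.  sum_j lam_j * Y[j] >= theta * Y[i],
                             sum_j lam_j * X[j, k] <= X[i, k] for each input k,
                             lam >= 0.
    Returns 1 / theta, so 1.0 means "on the frontier".
    """
    n = len(Y)
    c = np.r_[-1.0, np.zeros(n)]                  # minimize -theta
    rows = [np.r_[Y[i], -Y]]                      # theta*Y[i] - sum_j lam_j*Y[j] <= 0
    rhs = [0.0]
    for k in range(X.shape[1]):                   # sum_j lam_j*X[j,k] <= X[i,k]
        rows.append(np.r_[0.0, X[:, k]])
        rhs.append(X[i, k])
    res = linprog(c, A_ub=np.array(rows), b_ub=np.array(rhs),
                  bounds=[(0, None)] * (n + 1), method="highs")
    return 1.0 / res.x[0]

# Invented sample: one output, inputs are (labor, capital).
Y = np.array([10.0, 14.0, 9.0, 20.0, 16.0])
X = np.array([[5.0, 2.0], [6.0, 5.0], [4.0, 1.0], [7.0, 9.0], [6.0, 6.0]])
for i in range(len(Y)):
    print(f"unit {i}: efficiency = {farrell_score(Y, X, i):.3f}")
```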
52.
We propose an extension to the basic DEA models that guarantees that if an intensity is positive, it must be at least as large as a pre-defined lower bound. This requirement adds an integer programming constraint known in Operations Research as a fixed-charge (FC) constraint; accordingly, we term the new model DEA_FC. The proposed model lies between the DEA models, which allow units to be scaled arbitrarily low, and the Free Disposal Hull model, which allows no scaling. We analyze 18 datasets from the literature to demonstrate that sufficiently low intensities, those for which the scaled Decision-Making Unit (DMU) has inputs and outputs below the minimum values observed, are pervasive, and that the new model ensures fairer comparisons without sacrificing the required discriminating power. We explain why the low-intensity phenomenon exists. In sharp contrast to standard DEA models, we demonstrate via examples that an inefficient DMU may play a pivotal role in determining the technology. We also propose a goal programming model that determines how deviations from the lower bounds affect efficiency, which we term the trade-off between the deviation gap and the efficiency gap.
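The abstract does not reproduce the formulation, but a fixed-charge lower bound on the intensities is conventionally imposed with binary variables. The input-oriented sketch below is one plausible reading, assuming a lower bound \(\ell_j\) for each DMU j and a large constant M; the authors' exact DEA_FC model may differ in detail.

```latex
\begin{align*}
\min\ \theta \quad \text{s.t.}\quad
  & \sum_{j} \lambda_j x_{kj} \le \theta\, x_{ko}, \qquad k = 1,\dots,m,\\
  & \sum_{j} \lambda_j y_{rj} \ge y_{ro}, \qquad r = 1,\dots,s,\\
  & \ell_j z_j \le \lambda_j \le M z_j, \qquad z_j \in \{0,1\}, \qquad j = 1,\dots,n.
\end{align*}
```

Here z_j = 0 forces lambda_j = 0, while z_j = 1 forces lambda_j >= ell_j; setting every ell_j = 0 recovers the standard model that scales units arbitrarily low, in line with the abstract's claim that DEA_FC sits between standard DEA and FDH.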
53.
Selecting Sites for New Facilities Using Data Envelopment Analysis   (Total citations: 3; self-citations: 0; by others: 3)
This paper develops a mathematical programming model for obtaining a best set of sites for planned facilities in situations where resource constraints are present. The specific setting involves selecting sites for a set of retail outlets such that the ratio of aggregate outputs to inputs for the selected set is maximal among all possible sets that could be chosen. At the same time, the model guarantees that the only allowable sets of stores are those that use the available resources to the maximum extent possible.
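Read as a fractional 0-1 program, the selection problem might be sketched as below; every symbol here is illustrative rather than taken from the paper.

```latex
\begin{align*}
\max_{z \in \{0,1\}^n}\ \frac{\sum_{j}\bigl(\sum_{r} u_r\, y_{rj}\bigr) z_j}
                             {\sum_{j}\bigl(\sum_{k} v_k\, x_{kj}\bigr) z_j}
\qquad \text{s.t.}\quad \sum_{j} c_j\, z_j \le B,
\end{align*}
```

where z_j selects site j, y_{rj} and x_{kj} are its projected outputs and inputs with weights u and v, c_j is its resource draw, and B the available budget; the maximal-resource-use condition in the abstract would additionally exclude any selected set to which a further site could be added within B.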
54.
This paper deals with a dynamic adjustment process in which the adjustment of a key variable input (labor) toward its desired level is modeled in a panel data context. The partial adjustment model is extended to make the adjustment parameter both firm- and time-specific by specifying it as a function of firm- and time-specific variables. The desired level of labor use is represented by a labor requirement function, which depends on outputs and other firm-specific variables. The catch-up factor is defined as the ratio of the actual to the desired level of employment. Productivity growth is then defined in terms of a shift in the desired level of labor use and the change in the catch-up factor. Swedish banking data are used as an application of the model.
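In symbols, the setup reads roughly as follows; the notation is mine, introduced only to fix ideas.

```latex
\begin{align*}
L_{it} - L_{i,t-1} &= \delta_{it}\bigl(L^{*}_{it} - L_{i,t-1}\bigr),
  \qquad \delta_{it} = g(w_{it}),\\
L^{*}_{it} &= f(y_{it}, z_{it}),
  \qquad C_{it} = L_{it} / L^{*}_{it},
\end{align*}
```

where delta is the firm- and time-specific adjustment parameter driven by covariates w, L* is the desired labor level given by the labor requirement function f of outputs y and firm-specific variables z, and C is the catch-up factor; productivity growth then combines the shift in L* with the change in C.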
55.
The purpose of this paper is to discuss the use of Value Efficiency Analysis (VEA) in efficiency evaluation when preference information is taken into account. Value efficiency analysis applies ideas developed for Multiple Objective Linear Programming (MOLP) to Data Envelopment Analysis (DEA). Preference information is given through the desirable structure of input and output values; the same values can be used for all units under evaluation, or the values can be specific to each unit. A decision-maker can specify the input and output values subjectively without any support, or use a multiple criteria support system to find those values on the efficient frontier. The underlying assumption is that the most preferred values maximize the decision-maker's implicitly known value function over the production possibility set or a subset of it. The purpose of value efficiency analysis is to estimate the increase in outputs and/or decrease in inputs needed to reach the indifference contour of the value function at its optimum. We briefly review the main ideas of value efficiency analysis, discuss practical aspects of its use, and consider some extensions.
56.
Deterministic frontier analysis (DFA), stochastic frontier analysis (SFA), and data envelopment analysis (DEA) are alternative analytical techniques designed to measure the efficiency of producers. All three were originally developed in a cross-sectional context, in which the objective is to compare the efficiencies of producers. More recently, all three have been extended to panel data, where it becomes possible to measure productivity change and to decompose it into its sources, one of which is efficiency change. However, when efficiency measurement techniques, particularly SFA, have been applied to panel data, the objective of the analysis has rarely been made clear: the measurement of efficiency, which may vary through time as well as across producers, or the measurement and decomposition of productivity change. In this paper I explore the use of each technique in a panel data context, and I find DFA and DEA to have achieved a more satisfactory reorientation toward productivity measurement than SFA has.
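For reference, the decomposition this literature typically works with is the Malmquist one: writing D^t for the period-t distance function, productivity change between t and t+1 factors into efficiency change times technical change (the Färe-Grosskopf-Norris-Zhang form; the paper may use an equivalent variant):

```latex
\begin{align*}
M\bigl(x^{t},y^{t},x^{t+1},y^{t+1}\bigr)
 = \frac{D^{t+1}\bigl(x^{t+1},y^{t+1}\bigr)}{D^{t}\bigl(x^{t},y^{t}\bigr)}
   \times
   \left[\frac{D^{t}\bigl(x^{t+1},y^{t+1}\bigr)}{D^{t+1}\bigl(x^{t+1},y^{t+1}\bigr)}
   \cdot \frac{D^{t}\bigl(x^{t},y^{t}\bigr)}{D^{t+1}\bigl(x^{t},y^{t}\bigr)}\right]^{1/2},
\end{align*}
```

with the first factor measuring efficiency change (catching up) and the bracketed geometric mean measuring the shift of the frontier itself.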
57.
This article provides a series of reflections on the practice of carrying out processual research on organisational change. At a broad level, it describes some of the main tasks associated with conducting company case studies and outlines the benefits of this approach for dealing with complex change data. At a more specific level, the article addresses three areas tied to the actual "doing" of processual research: first, the notion of tacit knowledge and "getting your hands dirty" through ongoing in-depth fieldwork; second, the design and implementation of a longitudinal case study research programme; third, the advantages and concerns of combining a range of different data collection techniques in processual studies. Overall, the intention is to provide useful reflections and practical insights, as well as something of the flavour of carrying out this type of research.
58.
This paper describes Bayesian methods for life test planning with Type II censored data from a Weibull distribution when the Weibull shape parameter is given. We use conjugate prior distributions and criteria based on estimating a quantile of interest of the lifetime distribution. One criterion is based on a precision factor for a credibility interval for a distribution quantile; the other is based on the length of the credibility interval. We provide simple closed-form expressions for the relationship between the needed number of failures and the precision criteria. Examples illustrate the results. (Received: October 2002 / Revised: March 2004)
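The closed forms are easy to motivate. With the shape parameter beta known, t^beta is exponentially distributed, so a gamma prior on its rate is conjugate, and under Type II censoring the posterior shape is the prior shape plus the number of failures r; the scale cancels in the ratio of the interval endpoints for any quantile, leaving a function of r alone. The sketch below, my illustration rather than the paper's expressions, inverts that relation numerically.

```python
# A hedged illustration (not the paper's exact expressions) of planning the
# number of failures r needed for a target precision factor. With the Weibull
# shape beta known, t**beta is exponential, so a Gamma(a, b) prior on its rate
# is conjugate and the posterior is Gamma(a + r, b + T); the scale b + T
# cancels in the endpoint ratio for the quantile
# t_p = (-log(1 - p) / rate)**(1 / beta), leaving a function of r alone.
from scipy.stats import gamma

def precision_factor(r, beta, a=1.0, alpha=0.05):
    """Ratio of upper to lower endpoint of the (1 - alpha) credibility
    interval for any Weibull quantile after r observed failures."""
    lo = gamma.ppf(alpha / 2, a + r)      # lower posterior-rate quantile
    hi = gamma.ppf(1 - alpha / 2, a + r)  # upper posterior-rate quantile
    return (hi / lo) ** (1.0 / beta)      # rate ratio maps to quantile ratio

# Smallest r achieving a precision factor of 1.5 (illustrative numbers).
beta, target = 2.0, 1.5
r = 1
while precision_factor(r, beta) > target:
    r += 1
print(f"needed failures: r = {r}")
```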
59.
This paper discusses the consequences of violating the normal distribution assumption embedded in Structural Equation Modeling (SEM). Based on real data from a large-sample customer satisfaction survey, we follow the procedures suggested in leading textbooks, document the consequences of this practice, and discuss its impact on decision making in marketing.
60.